The cold-start problem is widely recognized in recommender systems and is usually addressed by following the general idea of leveraging the abundant interaction records of warm users to infer the preferences of cold users. However, the performance of these solutions is limited by the amount of records available from warm users. Thus, building a recommender system based on only a few users' interaction records remains a challenging problem for unpopular or early-stage recommendation platforms. This paper focuses on solving the few-shot recommendation problem for news recommendation based on two observations. First, news articles on different platforms (even in different languages) may share similar topics. Second, user preferences over these topics are transferable across platforms. We therefore propose to solve the few-shot news recommendation problem by transferring user-news preferences from a rich-resource source domain to a low-resource target domain. To bridge two domains in different languages, without any overlapping users or news, we propose a novel unsupervised cross-lingual transfer model as the news encoder that aligns semantically similar news in the two domains. A user encoder is constructed on top of the aligned news encodings and transfers user preferences from the source domain to the target domain. Experimental results on two real-world news recommendation datasets show that our proposed method outperforms state-of-the-art baselines on few-shot news recommendation.
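As a rough illustration of the described pipeline, the sketch below aligns source- and target-domain news in a shared space with a contrastive objective and builds a user representation on top of the aligned embeddings. The module names, the InfoNCE-style loss, and mean pooling are assumptions for illustration, not the paper's exact design.

```python
# Hypothetical sketch of cross-lingual news alignment + preference transfer.
# Module names and the InfoNCE-style alignment loss are assumptions, not the
# paper's actual formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NewsEncoder(nn.Module):
    """Maps pre-extracted (multilingual) text features into a shared space."""
    def __init__(self, in_dim=768, out_dim=256):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(),
                                  nn.Linear(out_dim, out_dim))

    def forward(self, x):                       # x: (batch, in_dim)
        return F.normalize(self.proj(x), dim=-1)

def alignment_loss(src_emb, tgt_emb, tau=0.1):
    """Contrastive loss pulling semantically similar source/target news together."""
    logits = src_emb @ tgt_emb.t() / tau        # (B, B) similarity matrix
    labels = torch.arange(src_emb.size(0))      # i-th source matches i-th target
    return F.cross_entropy(logits, labels)

class UserEncoder(nn.Module):
    """Aggregates a user's clicked-news embeddings into a preference vector."""
    def forward(self, clicked_news_emb):        # (history_len, out_dim)
        return clicked_news_emb.mean(dim=0)     # mean pooling as a simple placeholder
```

Because the user encoder only consumes aligned news embeddings, a preference model trained on source-domain click logs could, in principle, score target-domain candidates without any overlapping users or news.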
In this paper, we consider recovering an $n$-dimensional signal from $m$ noise-corrupted binary measurements, under the assumption that the target signal has a low generative intrinsic dimension, i.e., the target signal can be approximately generated by an $L$-Lipschitz generator $G: \mathbb{R}^k \rightarrow \mathbb{R}^n$, $k \ll n$. Although the binary measurement model is highly nonlinear, we propose a least-squares decoder and prove that, up to a constant $c$, with high probability the least-squares decoder achieves a sharp estimation error $\mathcal{O}\big(\sqrt{k\log(Ln)/m}\big)$ as long as $m \geq \mathcal{O}(k\log(Ln))$. Extensive numerical simulations and comparisons with state-of-the-art methods show that the least-squares decoder is robust to noise and sign flips, as indicated by our theory. By constructing a ReLU network with properly chosen depth and width, we verify the (approximate) deep generative prior, which is of independent interest.
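The least-squares decoding idea can be illustrated with a toy experiment: fix a generator $G$ and minimize the squared data fit over its latent code. The stand-in generator, the Adam optimizer, and the noiseless one-bit measurements below are illustrative assumptions; only the least-squares objective mirrors the abstract.

```python
# Toy least-squares decoding from one-bit measurements under a generative prior.
# The generator G, its latent dimension k, and the optimizer are assumptions.
import torch

n, m, k = 512, 200, 10
G = torch.nn.Sequential(torch.nn.Linear(k, 128), torch.nn.ReLU(),
                        torch.nn.Linear(128, n))          # stand-in Lipschitz generator

torch.manual_seed(0)
z_true = torch.randn(k)
x_true = G(z_true).detach()
A = torch.randn(m, n) / m ** 0.5
y = torch.sign(A @ x_true)                                 # noiseless one-bit measurements

z = torch.zeros(k, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = ((y - A @ G(z)) ** 2).mean()                    # least-squares data fit
    loss.backward()
    opt.step()

x_hat = G(z).detach()
# one-bit measurements lose the scale, so compare direction only
print("relative error:",
      torch.norm(x_hat / torch.norm(x_hat) - x_true / torch.norm(x_true)).item())
```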
This paper studies learning node representations with graph neural networks (GNNs) in unsupervised scenarios. Specifically, we derive a theoretical analysis and provide an empirical demonstration of the unstable performance of GNNs over different graph datasets when the supervision signals are not appropriately defined. The performance of GNNs depends on both the node feature smoothness and the locality of the graph structure. To smooth the discrepancy between node proximity measured by the graph topology and by node features, we propose SAIL, a novel Self-Augmented graph contrastIve Learning framework, with two complementary self-distillation regularization modules, i.e., intra- and inter-graph knowledge distillation. We demonstrate the competitive performance of SAIL on a variety of graph applications. Even with a single GNN layer, SAIL consistently achieves competitive or better performance on various benchmark datasets compared with state-of-the-art baselines.
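As a loose illustration of the self-distillation idea, the sketch below lets a node's neighborhood-smoothed representation act as a teacher for its own raw representation; SAIL's actual intra- and inter-graph distillation losses are more elaborate, so treat this as an assumption-laden sketch.

```python
# Illustrative self-distillation regularizer: a node's smoothed neighborhood
# representation (teacher) guides its own representation (student).
import torch
import torch.nn.functional as F

def neighborhood_smooth(H, adj):
    """One propagation step: average each node's neighbors (H: (N, d), adj: (N, N))."""
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    return (adj @ H) / deg

def self_distill_loss(H, adj):
    student = F.normalize(H, dim=-1)
    teacher = F.normalize(neighborhood_smooth(H, adj).detach(), dim=-1)
    # encourage each node to agree with its smoothed (teacher) view
    return (1 - (student * teacher).sum(dim=-1)).mean()

H = torch.randn(5, 16)
adj = (torch.rand(5, 5) > 0.5).float()
print(self_distill_loss(H, adj))
```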
Nowadays, time-stamped web documents related to a general news query flood the Internet, and timeline summarization targets concisely summarizing the evolution trajectory of events along the timeline. Unlike traditional document summarization, timeline summarization needs to model the time series information of the input events and summarize important events in chronological order. To tackle this challenge, in this paper, we propose a Unified Timeline Summarizer (UTS) that can generate abstractive and extractive timeline summaries in time order. Concretely, in the encoder part, we propose a graph-based event encoder that relates multiple events according to their content dependency and learns a global representation of each event. In the decoder part, to ensure the chronological order of the abstractive summary, we propose to extract event-level attention during the generation process, with sequential information retained, and use it to simulate the evolutionary attention of the ground-truth summary. The event-level attention can also be used to assist extractive summarization, where the extracted summary likewise follows the time order. We augment the previous Chinese large-scale timeline summarization dataset and collect a new English timeline dataset. Extensive experiments conducted on these datasets and on the out-of-domain Timeline 17 dataset show that UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.
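A minimal sketch of how event-level attention could be accumulated over decoding steps and reused to pick events for an extractive, chronologically ordered summary; the function names and the simple dot-product attention are illustrative assumptions, not UTS's architecture.

```python
# Hypothetical sketch: event-level attention at each decoding step, reused to
# rank events for extraction. Names and shapes are illustrative only.
import torch
import torch.nn.functional as F

def event_attention(decoder_state, event_reprs):
    """decoder_state: (d,), event_reprs: (num_events, d) -> attention over events."""
    scores = event_reprs @ decoder_state                 # (num_events,)
    return F.softmax(scores, dim=0)

def extract_timeline(event_reprs, decoder_states, top_k=5):
    """Accumulate attention mass over decoding steps, pick the top events,
    and return them in chronological (original) order."""
    attn = torch.stack([event_attention(s, event_reprs) for s in decoder_states]).sum(0)
    picked = torch.topk(attn, k=min(top_k, len(attn))).indices
    return sorted(picked.tolist())                       # chronological order preserved

events = torch.randn(8, 32)
states = [torch.randn(32) for _ in range(4)]
print(extract_timeline(events, states, top_k=3))
```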
Our work targets searching for feasible adversarial perturbations to attack a classifier with high-dimensional categorical inputs in a domain-agnostic setting. This is intrinsically an NP-hard knapsack problem where the exploration space grows explosively as the feature dimension increases. Without the help of domain knowledge, solving this problem via heuristic methods such as Branch-and-Bound suffers from exponential complexity and can still yield arbitrarily bad attack results. We address the challenge through the lens of multi-armed bandit based combinatorial search. Our proposed method, namely FEAT, treats modifying each categorical feature as pulling an arm in multi-armed bandit programming. Our objective is to achieve a highly efficient and effective attack using an Orthogonal Matching Pursuit (OMP)-enhanced Upper Confidence Bound (UCB) exploration strategy. Our theoretical analysis bounding the regret gap of FEAT guarantees its practical attack performance. In empirical analysis, we compare FEAT with other state-of-the-art domain-agnostic attack methods over various real-world categorical data sets of different applications. Substantial experimental observations confirm the expected efficiency and attack effectiveness of FEAT applied in different application scenarios. Our work further hints at the applicability of FEAT for assessing the adversarial vulnerability of classification systems with high-dimensional categorical inputs.
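A simplified sketch of the bandit view: each categorical feature is an arm, and arms are chosen by a UCB score. The OMP enhancement and FEAT's actual reward design are omitted; `attack_gain` is a hypothetical placeholder oracle.

```python
# Simplified UCB arm selection over categorical features (one arm per feature).
# The reward oracle and the OMP-enhanced exploration used by FEAT are not
# reproduced here; `attack_gain` is a hypothetical placeholder.
import math
import random

def ucb_attack(num_features, attack_gain, budget=100, c=1.0):
    counts = [0] * num_features
    values = [0.0] * num_features
    modified = set()
    for t in range(1, budget + 1):
        # UCB score: empirical mean gain + exploration bonus (untried arms first)
        scores = [values[i] + c * math.sqrt(math.log(t) / counts[i])
                  if counts[i] > 0 else float("inf")
                  for i in range(num_features)]
        arm = max(range(num_features), key=lambda i: scores[i])
        reward = attack_gain(arm)                 # e.g., drop in victim confidence
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
        if reward > 0:
            modified.add(arm)
    return modified

# usage with a dummy reward oracle
print(ucb_attack(20, attack_gain=lambda arm: random.random() - 0.5, budget=50))
```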
In a citation graph, adjacent paper nodes share related scientific terms and topics. The graph thus conveys unique structure information about document-level relatedness that can be utilized in the paper summarization task, for exploring beyond the intra-document information. In this work, we focus on leveraging citation graphs to improve scientific paper extractive summarization under different settings. We first propose a Multi-granularity Unsupervised Summarization model (MUS) as a simple and low-cost solution to the task. MUS finetunes a pre-trained encoder model on the citation graph via link prediction tasks. Then, summary sentences are extracted from the corresponding paper by considering multi-granularity information. Preliminary results demonstrate that the citation graph is helpful even in a simple unsupervised framework. Motivated by this, we next propose a Graph-based Supervised Summarization model (GSS) to achieve more accurate results on the task when large-scale labeled data are available. Apart from employing link prediction as an auxiliary task, GSS introduces a gated sentence encoder and a graph information fusion module to take advantage of the graph information to polish the sentence representation. Experiments on a public benchmark dataset show that MUS and GSS bring substantial improvements over the prior state-of-the-art model.
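A minimal sketch of using link prediction on the citation graph as a finetuning signal for a paper encoder, with a dot-product edge scorer and random negative sampling as illustrative assumptions; the multi-granularity extraction step is not shown.

```python
# Link prediction on a citation graph as a finetuning signal for a paper encoder.
# The encoder and negative-sampling scheme are illustrative assumptions.
import torch
import torch.nn.functional as F

def link_prediction_loss(encoder, paper_feats, pos_edges, num_neg=1):
    """paper_feats: (num_papers, d_in); pos_edges: (num_pos, 2) citing->cited pairs."""
    H = encoder(paper_feats)                                   # (num_papers, d)
    src, dst = pos_edges[:, 0], pos_edges[:, 1]
    pos_score = (H[src] * H[dst]).sum(-1)                      # positive edge scores
    neg_dst = torch.randint(0, H.size(0), (len(src) * num_neg,))
    neg_src = src.repeat(num_neg)
    neg_score = (H[neg_src] * H[neg_dst]).sum(-1)              # random negative edges
    labels = torch.cat([torch.ones_like(pos_score), torch.zeros_like(neg_score)])
    return F.binary_cross_entropy_with_logits(torch.cat([pos_score, neg_score]), labels)

enc = torch.nn.Linear(128, 64)                                 # stand-in paper encoder
feats = torch.randn(30, 128)
edges = torch.randint(0, 30, (50, 2))
print(link_prediction_loss(enc, feats, edges))
```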
Math word problem (MWP) solving is an important task in question answering that requires human-like reasoning ability. Analogical reasoning has long been used in mathematical education, as it enables students to apply common relational structures of mathematical situations to solve new problems. In this paper, we propose to build a novel MWP solver by leveraging analogical MWPs, which advances the solver's generalization ability across different kinds of MWPs. The key idea, named analogy identification, is to associate analogical MWP pairs in a latent space, i.e., encoding an MWP close to another analogical MWP while moving it away from non-analogical ones. Moreover, a solution discriminator is integrated into the MWP solver to enhance the association between the representations of MWPs and their true solutions. The evaluation results verify that our proposed analogical learning strategy improves the performance of MWP-BERT on Math23k beyond the state-of-the-art model Generate2Rank, with 5 times fewer parameters in the encoder. We also find that our model has a stronger generalization ability in solving difficult MWPs, thanks to the analogical learning from easy MWPs.
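A sketch of analogy identification as a margin loss in the latent space, pulling an MWP toward an analogical MWP and away from a non-analogical one; the triplet form and the margin value are illustrative assumptions rather than the paper's exact objective.

```python
# Analogy identification via a triplet-style margin loss: an MWP encoding is
# pulled toward an analogical MWP and pushed away from a non-analogical one.
import torch
import torch.nn.functional as F

def analogy_triplet_loss(anchor, analogical, non_analogical, margin=0.3):
    """All inputs: (B, d) MWP encodings."""
    a = F.normalize(anchor, dim=-1)
    pos = F.normalize(analogical, dim=-1)
    neg = F.normalize(non_analogical, dim=-1)
    pos_sim = (a * pos).sum(-1)            # similarity to the analogical MWP
    neg_sim = (a * neg).sum(-1)            # similarity to the non-analogical MWP
    return F.relu(margin + neg_sim - pos_sim).mean()
```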
Current math word problem (MWP) solvers are usually Seq2Seq models trained on (one-problem; one-solution) pairs, each made of a problem description and a solution showing the reasoning flow to reach the correct answer. However, one MWP naturally has multiple solution equations. Training an MWP solver on (one-problem; one-solution) pairs excludes the other correct solutions, and thus limits the solver's generalizability. One feasible remedy is to augment a given problem with multiple solutions. However, it is difficult to collect diverse and accurate augmented solutions through human effort. In this paper, we design a new training framework for an MWP solver by introducing a solution buffer and a solution discriminator. The buffer stores solutions generated by the MWP solver to encourage training-data diversity. The discriminator controls the quality of buffered solutions that participate in training. Our framework is flexibly applicable to a wide range of fully, semi-weakly, and weakly supervised training settings for all Seq2Seq MWP solvers. We conduct extensive experiments on the benchmark dataset Math23k and a new dataset named Weak12k, and show that our framework improves the performance of various MWP solvers under different settings by generating correct and diverse solutions.
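A minimal sketch of the solution-buffer bookkeeping, where generated equations are kept only if they reproduce the gold answer and are later sampled as extra training targets. The postfix equation format and the answer check (standing in for the learned discriminator) are assumptions for illustration.

```python
# Minimal solution-buffer bookkeeping: store solver-generated equations that
# evaluate to the gold answer, and sample from the buffer during training.
# The postfix format and answer check stand in for the learned discriminator.
import random

def evaluate_equation(tokens):
    """Evaluate a postfix equation such as ['3', '4', '+'] -> 7.0."""
    stack = []
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    for tok in tokens:
        if tok in ops:
            b, a = stack.pop(), stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack[0]

buffer = {}                                        # problem id -> set of correct equations

def maybe_buffer(problem_id, candidate_tokens, gold_answer, tol=1e-4):
    """Keep a generated solution only if it reproduces the gold answer."""
    try:
        if abs(evaluate_equation(candidate_tokens) - gold_answer) < tol:
            buffer.setdefault(problem_id, set()).add(tuple(candidate_tokens))
    except (ZeroDivisionError, ValueError, IndexError):
        pass                                       # malformed or invalid candidate

def sample_training_solution(problem_id):
    """Sample a verified, possibly alternative solution for this problem (if any)."""
    pool = list(buffer.get(problem_id, ()))
    return random.choice(pool) if pool else None

maybe_buffer("p1", ["3", "4", "+"], 7.0)
maybe_buffer("p1", ["4", "3", "+"], 7.0)           # a second, equally correct solution
print(sample_training_solution("p1"))
```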
Although graph neural networks (GNNs) have demonstrated their efficacy in handling non-Euclidean structured data, they are difficult to deploy in real-world applications due to the scalability constraints imposed by multi-hop data dependencies. Existing methods attempt to address this scalability issue by training multi-layer perceptrons (MLPs) with the labels produced by trained GNNs. Even though the performance of MLPs can be significantly improved this way, two issues still prevent MLPs from outperforming GNNs and being used in practice: the ignorance of graph structure information and the sensitivity to node feature noise. In this paper, we propose to learn NOise-robust Structure-aware MLPs On Graphs (NOSMOG) to overcome these challenges. Specifically, we first complement node content with position features to help MLPs capture graph structure information. We then design a novel representational similarity distillation strategy to inject structural node similarity into MLPs. Finally, we introduce adversarial feature augmentation to ensure stable learning against feature noise and further improve performance. Extensive experiments demonstrate that NOSMOG outperforms GNNs and state-of-the-art methods in both transductive and inductive settings across seven datasets, while maintaining competitive inference efficiency.
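A sketch of the representational-similarity distillation term, where the MLP student's pairwise node similarities are pushed toward the GNN teacher's; the cosine-similarity matrices and MSE form are illustrative assumptions.

```python
# Representational similarity distillation: the student's pairwise node
# similarity matrix is pushed toward the teacher's. The MSE form is an
# illustrative choice.
import torch
import torch.nn.functional as F

def similarity_distillation(student_h, teacher_h):
    """student_h, teacher_h: (num_nodes, d) node representations."""
    s = F.normalize(student_h, dim=-1)
    t = F.normalize(teacher_h, dim=-1).detach()     # teacher (GNN) is frozen
    return F.mse_loss(s @ s.t(), t @ t.t())         # match pairwise cosine similarities

print(similarity_distillation(torch.randn(10, 32), torch.randn(10, 64)))
```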
Many ontologies, i.e., description logic (DL) knowledge bases, have been developed to provide rich knowledge about various domains, and many of them are based on ALC, a prototypical and expressive DL, or its extensions. The main task in exploring ALC ontologies is to compute semantic entailments. Symbolic methods can guarantee sound and complete semantic entailments but are sensitive to inconsistency and missing information. To this end, we propose FALCON, a fuzzy ALC ontology neural reasoner. FALCON uses fuzzy logic operators to generate single model structures for arbitrary ALC ontologies and uses multiple model structures to compute semantic entailments. Theoretical results show that FALCON is guaranteed to be a sound and complete algorithm for computing semantic entailments over ALC ontologies. Experimental results show that FALCON enables not only approximate reasoning (reasoning over incomplete ontologies) and paraconsistent reasoning (reasoning over inconsistent ontologies), but also improves machine learning in the biomedical domain by incorporating background knowledge from ALC ontologies.
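As a rough illustration, the sketch below scores a few ALC constructors over a tiny fuzzy model structure using standard fuzzy-logic operators; the specific operators and the way FALCON builds model structures are assumptions here.

```python
# Fuzzy-logic operators for scoring ALC constructors over a finite model
# structure. Domain elements get membership degrees in [0, 1]; the choice of
# product t-norm for conjunction and max-min for the existential is illustrative.
import numpy as np

def f_and(c, d):            # conjunction  C ⊓ D
    return c * d

def f_not(c):               # negation     ¬C
    return 1.0 - c

def f_exists(role, c):
    """Existential restriction ∃r.C: role is an (n, n) fuzzy relation matrix,
    c an (n,) membership vector; element x gets max_y min(r(x, y), C(y))."""
    return np.max(np.minimum(role, c[None, :]), axis=1)

# tiny model structure with 3 domain elements
C = np.array([0.9, 0.2, 0.7])
r = np.array([[0.0, 0.8, 0.1],
              [0.0, 0.0, 0.9],
              [0.3, 0.0, 0.0]])
print(f_and(C, f_not(C)))        # degree of the contradictory concept C ⊓ ¬C
print(f_exists(r, C))            # degree of ∃r.C for each domain element
```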